A Local Non-Negative Pursuit Method for Intrinsic Manifold Structure Preservation

Authors

  • Dongdong Chen
  • Jian Cheng Lv
  • Zhang Yi
Abstract

Local neighborhood selection plays a crucial role in most representation-based manifold learning algorithms. This paper reveals that an improper choice of neighborhood for representation learning introduces negative components into the learnt representations. Importantly, representations with negative components compromise the preservation of the intrinsic manifold structure. In this paper, a local non-negative pursuit (LNP) method is proposed for neighborhood selection, and non-negative representations are learnt. Moreover, it is proved that the learnt representations are sparse and convex. Theoretical analysis and experimental results show that the proposed method matches or outperforms the state of the art on various manifold learning problems.

Introduction

Manifold learning generally aims to learn a data representation that uncovers the intrinsic manifold structure. It is extensively used in machine learning, pattern recognition, and computer vision (Wright et al. 2009; Lv et al. 2009; Liu, Lin, and Yu 2010; Cai et al. 2011). To learn a data representation, the neighbors of a data point must be selected in advance. Existing neighborhood selection methods fall into two categories: K-nearest-neighbor (KNN) methods and ℓ1-norm minimization methods. Representation learning algorithms divide accordingly into KNN-based algorithms, such as Locally Linear Embedding (LLE; Roweis and Saul 2000) and Laplacian Eigenmaps (LEM; Belkin and Niyogi 2003), and ℓ1-based algorithms, such as Sparse Manifold Clustering and Embedding (SMCE; Elhamifar and Vidal 2011). However, the KNN method is heuristic, and it is not easy to select proper neighbors of a data point in practical applications. On the other hand, the working mechanism of ℓ1-based methods has not been fully elucidated (Zhang, Yang, and Feng 2011), and the solution of an ℓ1-norm minimization does not reflect the spatial distribution of the samples. More importantly, the representations learnt by these algorithms cannot avoid negative components. Representations with negative components cannot correctly reflect the essential relations between data pairs, and the intrinsic structure of the data is broken as a result. Hoyer (2002) proposed Non-negative Sparse Coding (NSC) to learn sparse non-negative representations; more recently, Zhuang et al. (2012) proposed non-negative low-rank and sparse representations for semi-supervised learning. Unfortunately, two concerns have not been discussed: why do negative components arise in the learnt representations, and how do they affect intrinsic manifold structure preservation?

In this paper, we first reveal that an improper neighborhood selection results in negative components in the learnt representations, and we illustrate that representations with negative components destroy the intrinsic manifold structure. To avoid negative components and preserve the intrinsic structure of the data, a local non-negative pursuit (LNP) method is proposed to select the neighbors and learn non-negative representations. The selected neighbors form a convex set, so that non-negative affine representations are learnt.
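As a concrete illustration of where negative components come from, the following sketch (ours, not code from the paper; the toy curve, the neighborhood size k, and the use of scipy.optimize.nnls are illustrative assumptions) compares unconstrained least-squares reconstruction weights over a KNN neighborhood with their non-negative counterpart:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical data: 50 points near a 1-D curve embedded in R^3.
t = np.sort(rng.uniform(0, 1, 50))
A = np.stack([t, np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)

i = 25                                     # query point
k = 8                                      # hypothetical neighborhood size
dists = np.linalg.norm(A - A[i], axis=1)
nbrs = np.argsort(dists)[1:k + 1]          # k nearest neighbors, excluding a_i
N = A[nbrs].T                              # 3 x k matrix of neighbor columns

# Unconstrained least squares: reconstruction weights may go negative.
w_ls, *_ = np.linalg.lstsq(N, A[i], rcond=None)

# Non-negative least squares: weights stay >= 0 and tend to be sparse.
w_nn, _ = nnls(N, A[i])

print("least-squares weights :", np.round(w_ls, 3))   # typically mixed signs
print("non-negative weights  :", np.round(w_nn, 3))   # >= 0, many zeros
```

On data like this, the unconstrained weights typically contain negative entries, while the non-negative solution zeroes out most neighbors, which matches the sparsity the paper attributes to non-negative representations.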
Further, we prove that the learnt representations are sparse and non-negative, properties that are useful for manifold dimensionality estimation and intrinsic manifold structure preservation. Theoretical analysis and experimental results show that the proposed method matches or outperforms the state of the art on various manifold learning problems.

Notations and Preliminaries

A = {a_i ∈ R^m}, i = 1, ..., n, is the data set, which lies on a manifold M of intrinsic dimension d (d ≪ m). A = [a_1, a_2, ..., a_n] is the matrix form of A. Generally, representation learning for a manifold involves solving the following problem:

A = AX, (1)

where X = [x_1, x_2, ..., x_n] ∈ R^{n×n} is the representation matrix of A. Accordingly, x_i = [x_{i1}, x_{i2}, ..., x_{in}]^T is the representation of a_i ∈ A (i = 1, 2, ..., n). X should preserve the intrinsic structure of M. Depending on the purpose, various constraints can be imposed on X to obtain particular representations, such as sparse representations. In this paper, three well-known representation learning algorithms (LLE, LEM, and SMCE) are used for comparison.
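To make problem (1) concrete, here is a minimal sketch of learning a non-negative, approximately affine representation matrix X with A ≈ AX and x_{ii} = 0. It is not the paper's LNP algorithm: the function name, the NNLS-based formulation, and the soft sum-to-one penalty weight lam are our illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def nonnegative_affine_representation(A, lam=1e3):
    """Sketch: solve A ~= A X column by column with x_i >= 0, x_ii = 0,
    and sum(x_i) ~= 1 enforced by a soft penalty of weight `lam`.
    Illustrates problem (1); this is NOT the paper's LNP algorithm."""
    m, n = A.shape
    X = np.zeros((n, n))
    ones = np.ones(n - 1)
    for i in range(n):
        idx = [j for j in range(n) if j != i]   # exclude a_i itself
        D = A[:, idx]
        # Append a row of ones so nnls also drives sum(x) toward 1.
        D_aug = np.vstack([D, lam * ones])
        b_aug = np.concatenate([A[:, i], [lam]])
        x, _ = nnls(D_aug, b_aug)
        X[idx, i] = x
    return X

# Usage on hypothetical data lying near a 2-D subspace of R^5.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 40))
X = nonnegative_affine_representation(A)
print("min weight:", X.min())                         # non-negative
print("column sums ~ 1:", np.round(X.sum(axis=0)[:5], 3))
print("reconstruction error:", np.linalg.norm(A - A @ X))
```

Because each column of X is a non-negative, approximately sum-to-one combination of the other points, each a_i is represented as (roughly) a convex combination of its selected neighbors, which is the kind of convexity property the paper's analysis refers to.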


Similar Articles

A Geometry Preserving Kernel over Riemannian Manifolds

The kernel trick and projection to tangent spaces are two choices for linearizing data points lying on Riemannian manifolds. These approaches provide the prerequisites for applying standard machine learning methods on Riemannian manifolds. Classical kernels implicitly project data to a high-dimensional feature space without considering the intrinsic geometry of the data points. ...


Label-Informed Non-negative Matrix Factorization with Manifold Regularization for Discriminative Subnetwork Detection

In this paper, we present a novel method for obtaining a low dimensional representation of a complex brain network that: (1) can be interpreted in a neurobiologically meaningful way, (2) emphasizes group differences by accounting for label information, and (3) captures the variation in disease subtypes/severity by respecting the intrinsic manifold structure underlying the data. Our method is a ...


Piecewise-Linear Manifold Learning

The need to reduce the dimensionality of a dataset whilst retaining inherent manifold structure is key in many pattern recognition, machine learning and computer vision tasks. This process is often referred to as manifold learning since the structure is preserved during dimensionality reduction by learning the intrinsic low-dimensional manifold that the data lies on. Since the inception of mani...


Geodesic Learning Algorithms Over Flag Manifolds

Recently, manifold structures have attracted attention in two respects in the machine learning literature. One is the manifold learning problem, that is, learning the intrinsic manifold structure of high-dimensional datasets. The other is the information-geometric approach to learning: exploiting the geometry of the parameter space of learning machines such as neural networks for improving con...


Quality Improvement of Non-manifold Hexahedral Meshes for Critical Feature Determination of Microstructure Materials

This paper describes a novel approach to improve the quality of non-manifold hexahedral meshes with feature preservation for microstructure materials. In earlier works, we developed an octree-based isocontouring method to construct unstructured hexahedral meshes for domains with multiple materials by introducing the notion of material change edge to identify the interface between two or more mat...



Venue: Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence

Publication year: 2014